Visual 1st to address how to refer to AI-generated images
Artificial intelligence’s ability to produce images that don’t originate in the real world yet are indistinguishable from photographs is upending traditional attitudes and perceptions. The broad availability of this technology is driving explosive growth of these photorealistic – but not real – images: Adobe recently announced that its generative artificial intelligence product Firefly has been used to create more than a billion images since its launch in March 2023, just over six months ago. A study by Everypixel Journal estimates that the four main platforms (DALL-E, Midjourney, Stable Diffusion, and Firefly) have produced more than 15 billion images in the past year, according to a press release from Visual 1st.
This spectacular innovation is, however, a double-edged sword: It offers great economic benefits to the many industries where visually communicating products or services is critically important. It is also a novel and powerful medium for artistic expression that can enable a new generation of creators to add exciting pages to the annals of art history, according to the organization.
Visual 1st therefore announced, with broad imaging industry support, that it is launching an open contest to pinpoint a term that will clearly and concisely refer to AI-generated photorealistic synthetic images. Such a term will enable those who create AI-based images to clearly identify their medium and, at the same time, will remove confusion as to which images are derived from the real world and which are not.
But there’s a major caveat: At a time when conflictual politics and the virtualization of information are shaking our foundational attitudes and fracturing a formerly (mostly) homogeneous consensus on the very nature of reality, our belief in the trustworthiness (albeit qualified) of photographs as witnesses to the real world is a valuable tool that enables our culture and our societies to function. And that belief is being severely undermined by generative AI.
“The problem isn’t just the potential for fake images to be perceived as real; it’s also the risk that genuine images might be discredited,” says Paul Melcher, managing director of Melcher System. “While the first issue can result in deception, the second can breed distrust. Distrust is especially concerning because, in contrast to deception, which affects a singular event and can be corrected, distrust erodes the core of a relationship and is often irreparable. Consequently, the credibility of all images, whether real or generated, becomes questionable.”
There is no question that AI creations are not photographs: a photograph is, by definition, an image created when a light-sensitive surface is exposed to light reflected from real-world objects. Yet because these creations convincingly appear to be photographs, and because no concise, expressive, and generally accepted term exists to designate them, they are broadly being called “photographs.” The resulting confusion between real and unreal images is a serious problem, much of which will be addressed and resolved when such a term emerges.